Experts Worry About News Stories Written with Artificial Intelligence

2023-08-04

Google is developing an artificial intelligence tool that produces news articles. And the development is troubling some experts.

They say such tools risk spreading propaganda or threatening the safety of people who provide reporters with information, or sources.

The New York Times reported last week that Google is testing a new product named Genesis. It uses artificial intelligence (AI) to produce news stories, or articles.

The New York Times said Genesis gathers information, like details about current events, to create news stories. Google already has pitched the product to the Times and other organizations, including The Washington Post and News Corp., which owns The Wall Street Journal newspaper.

The launch of AI chatbot ChatGPT last fall created a debate about how AI ought to be used in the news industry.

AI tools can help reporters research by quickly examining data from large computer files. AI can also help reporters confirm or disprove information from sources. But there is fear that AI tools could spread propaganda and cause journalists to lose the skill of reporting.

John Scott-Railton studies disinformation at the Citizen Lab, part of the University of Toronto. He told VOA that a lot of information gathered by AI comes from places on the internet "where disinformation and propaganda get targeted."

Paul M. Barrett is with New York University's Stern Center for Business and Human Rights. He agrees that AI could increase the spread of false information.

"It's going to be easier to generate myths and disinformation," he told VOA. "The supply of misleading content is, I think, going to go up."

In an emailed statement to VOA, a Google spokesperson said its tool is designed to help reporters, or journalists, in their work, not replace them. AI cannot replace the "role journalists have in reporting, creating and fact-checking their articles," the spokesperson said.

AI tools in journalism could also hurt trust in news organizations. Public opinion researchers Gallup and the Knight Foundation released a study in February about trust in the media. It said that half of Americans believe that national news organizations try to mislead or misinform the public.

"I'm puzzled that anyone thinks that the solution to this problem is to introduce a much less credible tool," said Scott-Railton. He earlier received support from Google's research group Google Ideas.

Reports say that AI chatbots regularly produce answers that are wrong or made up. Digital experts are also concerned about the security risks of using AI tools to produce news stories.

Reporters would have to be careful not to reveal to AI systems information like "the identity of a confidential source, or, I would say, even information that the journalist wants to make sure doesn't become public," Barrett said.

Scott-Railton said he thinks AI probably could be used in most industries. But it is important not to introduce the technology too quickly, especially in the news.

"What scares me is that the lessons learned in this case will come at the cost of well-earned reputations, will come at the cost of factual accuracy when it actually counts," he said.

I'm Dan Novak.
Liam Scott wrote this story for Voice of America. Dan Novak adapted it for VOA Learning English.

___________________________________________________

Words in This Story

pitch - v. to try to sell something to someone; to talk about something or someone in a way that will make others accept it

myth - n. an idea or story believed by many people but that is not true

content - n. the information that appears in media such as books, newspapers, television, movies

puzzle - v. to be confused; to make something difficult to understand

confidential - adj. secret or private

scare - v. to make a person afraid or fearful

reputation - n. the common opinion people have about a person

accuracy - n. the quality of having no mistake or error